
    Reliability

    This special volume of Statistical Science presents some innovative, if not provocative, ideas in the area of reliability, or perhaps more appropriately named, integrated system assessment. In this age of exponential growth in science, engineering and technology, the capability to evaluate the performance, reliability and safety of complex systems presents new challenges. Today's methodology must respond to the ever-increasing demand for such evaluations to provide key information for decision and policy makers at all levels of government and industry, on problems ranging from international security to space exploration. We, the co-editors of this volume and the authors, believe that scientific progress in reliability assessment requires the development of processes, methods and tools that combine diverse information types (e.g., experiments, computer simulations, expert knowledge) from diverse sources (e.g., scientists, engineers, business developers, technology integrators, decision makers) to assess quantitative performance metrics that can aid decision making under uncertainty. These are highly interdisciplinary problems. The principal role of the statistical sciences is to bring statistical rigor, thinking and methodology to these problems. Published at http://dx.doi.org/10.1214/088342306000000664 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Some Guidelines For Using Nonparametric Methods For Modeling Data From Response Surface Designs

    Traditional response surface methodology focuses on modeling responses using parametric models, with designs chosen to balance cost against adequate estimation of parameters and prediction in the design space. Using nonparametric smoothing to approximate the response surface offers both opportunities and challenges. This article explores conditions under which these methods can be appropriately used to increase the flexibility of the surfaces modeled. The Box and Draper (1987) printing ink study is used to illustrate the methods.
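    As a toy illustration of the general idea (not the authors' method), a Nadaraya-Watson kernel smoother can approximate a response surface from observed design points; the grid, bandwidth, and quadratic test surface below are invented for the sketch:

```python
import numpy as np

def nw_smooth(X, y, x0, bandwidth=0.5):
    """Nadaraya-Watson kernel estimate of the response at query point x0.

    X : (n, p) design points, y : (n,) observed responses, x0 : (p,) point.
    A Gaussian kernel weights each observation by its distance to x0.
    """
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return np.sum(w * y) / np.sum(w)

# Toy surface: y = x1^2 + x2^2 observed without noise on a 5x5 grid.
grid = np.linspace(-1, 1, 5)
X = np.array([(a, b) for a in grid for b in grid])
y = X[:, 0] ** 2 + X[:, 1] ** 2

# Smoothed estimate at the center of the design space (true value is 0;
# the kernel average is slightly biased upward by neighboring points).
est = nw_smooth(X, y, np.array([0.0, 0.0]), bandwidth=0.3)
```

    The bandwidth controls the flexibility of the fitted surface, which is the trade-off the article examines: small bandwidths track the data closely, large ones approach a flat parametric-style fit.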

    A More Efficient Way Of Obtaining A Unique Median Estimate For Circular Data

    The procedure for computing the sample circular median occasionally leads to a non-unique estimate of the population circular median, since there can sometimes be two or more diameters that divide the data equally and have the same circular mean deviation. A modification in the computation of the sample median is suggested which not only eliminates this non-uniqueness problem, but is also computationally easier and faster than the existing alternative.
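    To see where the non-uniqueness can arise, here is a minimal sketch of the standard sample circular median, which searches candidate diameters for the direction minimizing the circular mean deviation; it does not implement the paper's modified estimator:

```python
import numpy as np

def circular_mean_deviation(theta, phi):
    """Mean arc distance from angles theta to direction phi,
    where d(a, b) = pi - |pi - |a - b||."""
    d = np.pi - np.abs(np.pi - np.abs(theta - phi) % (2 * np.pi))
    return d.mean()

def sample_circular_median(theta):
    """Standard approach: candidate medians are the data points and their
    antipodes; return the one with the smallest circular mean deviation.
    Ties among candidates are exactly the non-uniqueness problem the
    paper addresses; np.argmin here just picks the first minimizer."""
    candidates = np.concatenate([theta, theta + np.pi]) % (2 * np.pi)
    devs = [circular_mean_deviation(theta, phi) for phi in candidates]
    return candidates[int(np.argmin(devs))]

theta = np.array([0.1, 0.2, 0.3])
med = sample_circular_median(theta)  # middle observation wins here
```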

    Effect Of Position Of An Outlier On The Influence Curve Of The Measures Of Preferred Direction For Circular Data

    Circular or angular data occur in many fields of applied statistics. A common problem of interest with circular data is estimating a preferred direction and its corresponding distribution. The problem is complicated by the wrap-around effect on the circle, which exists because there is no natural minimum or maximum. The usual statistics employed for linear data are inappropriate for directional data, as they do not account for their circular nature. The robustness of three common choices for summarizing the preferred direction (the sample circular mean, the sample circular median and a circular analog of the Hodges-Lehmann estimator) is evaluated via their influence functions.
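    A small sketch of one of the three summaries, the sample circular mean (the direction of the resultant of the unit vectors), showing how a single outlying angle pulls the estimate; the data are invented:

```python
import numpy as np

def circular_mean(theta):
    """Mean direction: angle of the resultant vector of the unit vectors
    (cos theta_i, sin theta_i), reduced to [0, 2*pi)."""
    return np.arctan2(np.sin(theta).sum(), np.cos(theta).sum()) % (2 * np.pi)

base = np.array([0.1, -0.1])               # tight cluster around 0
with_outlier = np.array([0.1, -0.1, np.pi / 2])

m0 = circular_mean(base)          # 0: the cluster's preferred direction
m1 = circular_mean(with_outlier)  # pulled toward the outlier at pi/2
```

    The gap between `m0` and `m1` is the kind of sensitivity the influence function quantifies as the outlier's position moves around the circle.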

    I-optimal or G-optimal: Do We Have to Choose?

    When optimizing an experimental design for good prediction performance based on an assumed second order response surface model, it is common to focus on a single optimality criterion, either G-optimality, for best worst-case prediction precision, or I-optimality, for best average prediction precision. In this article, we illustrate how using particle swarm optimization to construct a Pareto front of non-dominated designs that balance these two criteria yields some highly desirable results. In most scenarios, there are designs that simultaneously perform well for both criteria. Seeing alternative designs that vary in how they balance G- and I-efficiency provides experimenters with choices that allow selection of a better match for their study objectives. We provide an extensive repository of Pareto fronts with designs for 17 common experimental scenarios, for 2 factors (design size N = 6 to 12), 3 factors (N = 10 to 16) and 4 factors (N = 15, 17, 20). These, combined with a detailed strategy for how to efficiently analyze, assess, and select between alternatives, give the reader the tools to select the ideal design with a tailored balance between G- and I-optimality for their own experimental situation.
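    Independently of the particle swarm search the article uses, the notion of a Pareto front of non-dominated designs can be sketched with a simple filter; the two coordinates below stand in for G- and I-criterion values (lower is better), and the points are made up:

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing both coordinates.

    A point q dominates p if q is no worse in both criteria and strictly
    better in at least one (for distinct points, componentwise <= plus
    q != p implies at least one strict improvement).
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (G-criterion, I-criterion) values for four candidate designs.
designs = [(1, 3), (2, 2), (3, 1), (3, 3)]
front = pareto_front(designs)  # (3, 3) is dominated by (2, 2)
```

    Each surviving point is a defensible choice; moving along the front trades worst-case precision against average precision, which is exactly the selection the repository of fronts supports.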

    How to Host a Data Competition: Statistical Advice for Design and Analysis of a Data Competition

    Data competitions rely on real-time leaderboards to rank competitor entries and stimulate algorithm improvement. While such competitions have become quite popular and prevalent, particularly in supervised learning formats, their implementations by the host are highly variable. Without careful planning, a supervised learning competition is vulnerable to overfitting, where the winning solutions are so closely tuned to the particular set of provided data that they cannot generalize to the underlying problem of interest to the host. Based on our experience, this paper outlines some important considerations for strategically designing relevant and informative data sets to maximize the learning outcome from hosting a competition. It also describes a post-competition analysis that enables robust and efficient assessment of the strengths and weaknesses of solutions from different competitors, as well as greater understanding of the regions of the input space that are well-solved. The post-competition analysis, which complements the leaderboard, uses exploratory data analysis and generalized linear models (GLMs). The GLMs not only expand the range of results we can explore, they also provide more detailed analysis of individual sub-questions, including similarities and differences between algorithms across different types of scenarios, universally easy or hard regions of the input space, and different learning objectives. When coupled with a strategically planned data generation approach, the methods provide richer and more informative summaries to enhance the interpretation of results beyond just the rankings on the leaderboard. The methods are illustrated with a recently completed competition to evaluate algorithms capable of detecting, identifying, and locating radioactive materials in an urban environment.
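    As a hedged sketch of this kind of GLM-based post-competition analysis (not the authors' actual model or data), a logistic GLM fit by gradient ascent can relate per-entry success to algorithm and input-region indicators; the per-cell success counts below are invented:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Fit a logistic GLM by plain gradient ascent on the log-likelihood.
    The log-likelihood is concave, so this converges for modest lr."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    return beta

# Hypothetical successes out of 20 trials per (algorithm, region) cell:
# algorithm A/B crossed with easy/hard regions of the input space.
counts = {(0, 0): 18, (1, 0): 14, (0, 1): 6, (1, 1): 3}
rows, ys = [], []
for (alg_b, hard), wins in counts.items():
    for i in range(20):
        rows.append([1.0, alg_b, hard])  # intercept, algorithm B, hard region
        ys.append(1.0 if i < wins else 0.0)
X, y = np.array(rows), np.array(ys)

beta = fit_logistic(X, y)
# beta[1] < 0: algorithm B does worse; beta[2] < 0: hard regions are harder.
```

    Coefficients like these are what let the analysis go beyond leaderboard rank, separating which algorithm is better from which regions of the input space are universally hard.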

    A genetic algorithm with memory for mixed discrete-continuous design optimization

    This paper describes a new approach for reducing the number of fitness function evaluations required by a genetic algorithm (GA) for optimization problems with mixed continuous and discrete design variables. The proposed additions to the GA make the search more effective and rapidly improve the fitness value from generation to generation. The additions involve memory as a function of both discrete and continuous design variables, multivariate approximation of the fitness function in terms of several continuous design variables, and localized search based on the multivariate approximation. The approach is demonstrated for the minimum weight design of a composite cylindrical shell with grid stiffeners.
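    The memory idea, caching fitness values keyed by the discrete variables and (rounded) continuous variables so that repeated or near-identical designs are not re-evaluated, can be sketched as follows; the rounding rule and `decimals` knob are illustrative assumptions, not details from the paper:

```python
def make_cached_fitness(fitness, decimals=3):
    """Wrap an expensive fitness function with a memory of past evaluations.

    Continuous variables are rounded before hashing so nearby designs reuse
    a stored value; discrete variables are used as-is. In a GA this avoids
    re-running the expensive analysis for revisited designs.
    """
    memory = {}

    def cached(discrete_vars, continuous_vars):
        key = (tuple(discrete_vars),
               tuple(round(x, decimals) for x in continuous_vars))
        if key not in memory:
            memory[key] = fitness(discrete_vars, continuous_vars)
        return memory[key]

    cached.memory = memory
    return cached

# Stand-in for an expensive structural analysis; `calls` counts evaluations.
calls = []
def weight(discrete_vars, continuous_vars):
    calls.append(1)
    return sum(discrete_vars) + sum(continuous_vars)

cached_weight = make_cached_fitness(weight)
a = cached_weight((1, 2), (0.5,))  # evaluated
b = cached_weight((1, 2), (0.5,))  # served from memory, no new call
```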

    A Genetic Algorithm for Mixed Integer Nonlinear Programming Problems Using Separate Constraint Approximations

    This paper describes a new approach for reducing the number of fitness and constraint function evaluations required by a genetic algorithm (GA) for optimization problems with mixed continuous and discrete design variables. The proposed additions to the GA make the search more effective and rapidly improve the fitness value from generation to generation. The additions involve memory as a function of both discrete and continuous design variables, and multivariate approximation of the individual functions' responses in terms of several continuous design variables. The approach is demonstrated for the minimum weight design of a composite cylindrical shell with grid stiffeners.

    Old(er) Care home residents and sexual/intimate citizenship

    Sexuality and intimacy in care homes for older people are overshadowed by concern with prolonging physical and/or psychological autonomy. When sexuality and intimacy have been addressed in scholarship, this can reflect a sexological focus concerned with how to continue sexual activity with reduced capacity. We review the (Anglophone) academic and practitioner literatures bearing on sexuality and intimacy in relation to older care home residents (though much of this applies to older people generally). We highlight how ageism (or ageist erotophobia), which defines older people as post-sexual, restricts opportunities for the expression of sexuality and intimacy. In doing so, we draw attention to more critical writing that recognises constraints on sexuality and intimacy and indicates solutions to some of the problems identified. We also highlight problems faced by lesbian, gay, bisexual and trans (LGB&T) residents, who are doubly excluded from sexual/intimate citizenship because of ageism combined with the heterosexual assumption. Older LGB&T residents/individuals can feel obliged to deny or disguise their identity. We conclude by outlining an agenda for research based on more sociologically informed practitioner-led work.